Deriving Dyad-Level Interaction Representation Using Interlocutors' Structural and Expressive Multimodal Behavior Features
Authors
Abstract
The overall interaction atmosphere often results from a complex interplay between each interlocutor's individual behavior expressions and the joint manifestation of dyadic interaction dynamics. Very little work, if any, has computationally analyzed human interaction at the dyad level. Hence, in this work, we propose to compute an extensive novel set of features representing multi-faceted aspects of a dyadic interaction. These features are grouped into two broad categories, expressive and structural behavior dynamics, which together capture within-speaker behavior manifestation, inter-speaker behavior dynamics, and durational and transitional statistics, providing a holistic behavior quantification at the dyad level. We carry out an experiment on recognizing the targeted affective atmosphere using the proposed expressive and structural behavior dynamics features derived from the audio and video modalities. Our experiment shows that including both expressive and structural behavior dynamics is essential for achieving promising recognition accuracy across six different classes (72.5%), where the structural features improve the recognition rates for the sad and surprise classes. Further analyses reveal important aspects of multimodal behavior dynamics within dyadic interactions that are related to the affective atmospheric scene.
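The abstract does not spell out how the two feature categories are computed. As a minimal sketch, assuming each interlocutor is summarized by a per-frame behavior descriptor and a binary speaking-activity track, dyad-level expressive and structural statistics of the kind described could look as follows (all function and feature names are illustrative, not the authors' implementation):

```python
import numpy as np

def expressive_features(a, b):
    """Expressive behavior dynamics (sketch): within-speaker level and
    variability of a per-frame descriptor (e.g. vocal energy or facial
    expression intensity), plus a simple inter-speaker synchrony term."""
    feats = {}
    for name, x in (("spk1", a), ("spk2", b)):
        feats[name + "_mean"] = float(np.mean(x))  # within-speaker level
        feats[name + "_std"] = float(np.std(x))    # within-speaker variability
    n = min(len(a), len(b))
    # Zero-lag correlation as a crude inter-speaker behavior dynamics cue.
    feats["sync_corr"] = float(np.corrcoef(a[:n], b[:n])[0, 1])
    return feats

def structural_features(talk1, talk2):
    """Structural behavior dynamics (sketch): durational and transitional
    statistics from binary speaking-activity tracks (1 = speaking)."""
    n = min(len(talk1), len(talk2))
    t1 = np.asarray(talk1[:n], dtype=bool)
    t2 = np.asarray(talk2[:n], dtype=bool)
    # Durational statistics: overlap and mutual-silence ratios.
    overlap = float(np.mean(t1 & t2))
    silence = float(np.mean(~t1 & ~t2))
    # Transitional statistics: count floor changes between the speakers.
    floor = t1.astype(int) - t2.astype(int)  # +1 = spk1, -1 = spk2, 0 = both/none
    held = floor[floor != 0]
    transitions = int(np.sum(held[1:] != held[:-1]))
    return {"overlap_ratio": overlap, "silence_ratio": silence,
            "n_floor_transitions": transitions}

rng = np.random.default_rng(0)
print(expressive_features(rng.random(100), rng.random(100)))
print(structural_features(rng.random(100) > 0.5, rng.random(100) > 0.5))
```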
Similar Papers
Analysis and Predictive Modeling of Body Language Behavior in Dyadic Interactions From Multimodal Interlocutor Cues
During dyadic interactions, participants adjust their behavior and give feedback continuously in response to the behavior of their interlocutors and the interaction context. In this paper, we study how a participant in a dyadic interaction adapts his/her body language to the behavior of the interlocutor, given the interaction goals and context. We apply a variety of psychology-inspired body lan...
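The truncated abstract does not name the model class; as a rough illustration only, one could regress a participant's body-language features on the interlocutor's multimodal cues. The ridge regressor and synthetic stand-in data below are assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: per-frame interlocutor cue vectors
# (e.g. prosody, head motion) and the participant's body-language
# features to be predicted; neither matches the paper's actual corpus.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))               # interlocutor cues
W = rng.normal(size=(12, 3))
Y = X @ W + 0.1 * rng.normal(size=(1000, 3))  # body-language targets

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, Y_tr)      # one possible model class
print("held-out R^2:", round(model.score(X_te, Y_te), 3))
```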
Using the Audio Respiration Signal for Multimodal Discrimination of Expressive Movement Qualities
In this paper we propose a multimodal approach to distinguish between movements displaying three different expressive qualities: fluid, fragmented, and impulsive movements. Our approach is based on the Event Synchronization algorithm, which is applied to compute the amount of synchronization between two low-level features extracted from multimodal data. In more detail, we use the energy of the...
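The Event Synchronization measure this approach builds on (after Quian Quiroga et al., 2002) counts quasi-simultaneous events in two streams within a tolerance tau and normalizes by the event counts. A minimal sketch, with hypothetical event times standing in for peaks of the audio respiration and movement-energy signals:

```python
import numpy as np

def event_synchronization(tx, ty, tau):
    """Event Synchronization (after Quian Quiroga et al., 2002).
    tx, ty: sorted arrays of event times; tau: maximum lag for two
    events to count as synchronous.  Returns Q in [0, 1], where 1
    means every event in one stream has a counterpart in the other."""
    def c(a, b):
        # Events in `a` occurring within tau after an event in `b`;
        # exactly simultaneous events contribute 1/2.
        count = 0.0
        for t in a:
            d = t - b
            count += np.sum((0 < d) & (d <= tau)) + 0.5 * np.sum(d == 0)
        return count

    if len(tx) == 0 or len(ty) == 0:
        return 0.0
    return (c(tx, ty) + c(ty, tx)) / np.sqrt(len(tx) * len(ty))

print(event_synchronization(np.array([1, 4, 8, 12]),
                            np.array([1, 5, 8, 13]), tau=1))  # 1.0
```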
The Evaluation of Microplanning and Surface Realization in the Generation of Multimodal Acts of Communication
In this paper, we describe an application domain which requires the computational simulation of human-human communication in which one of the interlocutors has an expressive communication disorder. The importance and evaluation of a process, called here microplanning and surface realization, for such communicative agents is discussed and a related exploratory study is described.
Multimodal Assessment of Conversational Engagement in Persons with Parkinson's Disease
Our capacity to engage in meaningful conversations depends on a multitude of communication cues: verbal delivery of articulate and intelligible speech, tone and modulation of voice, exhibition of a range of facial expressions, and display of body gestures, among others. Parkinson's disease diminishes verbal and non-verbal communication facilities in affected persons. Occupational therapists meas...
Behavior Matching in Multimodal Communication Is Synchronized
A variety of theoretical frameworks predict the resemblance of behaviors between two people engaged in communication, in the form of coordination, mimicry, or alignment. However, little is known about the time course of the behavior matching, even though there is evidence that dyads synchronize oscillatory motions (e.g., postural sway). This study examined the temporal structure of nonoscillato...
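The truncated abstract does not reveal the study's analysis method, but a standard way to probe the time course of behavior matching is lagged correlation between the two partners' behavior signals, where the peak lag estimates the follower's delay. A sketch under that assumption:

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Pearson correlation of two behavior time series at each lag.
    Positive lags pair x[t] with y[t + lag], so a positive peak lag
    means partner y follows (matches) partner x after that delay."""
    lags = np.arange(-max_lag, max_lag + 1)
    r = []
    for lag in lags:
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:lag]
        r.append(np.corrcoef(a, b)[0, 1])
    return lags, np.array(r)

# Demo on a smoothed (nonoscillatory) random signal that partner y
# reproduces 25 frames after partner x.
rng = np.random.default_rng(1)
base = np.convolve(rng.normal(size=600), np.ones(20) / 20, mode="same")
x, y = base[25:], base[:-25]
lags, r = lagged_correlation(x, y, max_lag=50)
print("peak lag (frames):", lags[np.argmax(r)])  # ~ +25
```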